Neutrino Interferometry In Curved Spacetime
Gravitational lensing introduces the possibility of multiple (macroscopic)
paths from an astrophysical neutrino source to a detector. Such a multiplicity
of paths can allow for quantum mechanical interference to take place that is
qualitatively different to neutrino oscillations in flat space. After an
illustrative example clarifying some under-appreciated subtleties of the phase
calculation, we derive the form of the quantum mechanical phase for a neutrino
mass eigenstate propagating non-radially through a Schwarzschild metric. We
subsequently determine the form of the interference pattern seen at a detector.
We show that the neutrino signal from a supernova could exhibit the
interference effects we discuss were it lensed by an object in a suitable mass
range. We finally conclude, however, that -- given current neutrino detector
technology -- the probability of such lensing occurring for a
(neutrino-detectable) supernova is tiny in the immediate future.
Comment: 25 pages, 1 .eps figure. Updated version -- with simplified notation -- accepted for publication in Phys.Rev.D. Extra author added.
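The flat-space analogue below sketches why lensing-induced interference differs from ordinary oscillations; it is a schematic illustration only, not the Schwarzschild phase calculation the paper performs (natural units, plane-wave approximation assumed).

```latex
% Phase accrued by mass eigenstate j over a baseline L at energy E:
\Phi_j = \frac{m_j^2 L}{2E}
% With two macroscopic lensed paths of lengths L_1 and L_2, the SAME
% eigenstate accrues different phases along each path, so the detector
% sees cross terms governed by
\Delta\Phi_j = \frac{m_j^2 \,(L_1 - L_2)}{2E}
% This depends on an individual mass squared and a path-length
% difference -- qualitatively different from standard oscillations,
% which depend only on mass-squared differences over a single path.
```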
Parity Logging: Overcoming the Small Write Problem in Redundant Disk Arrays
Parity encoded redundant disk arrays provide highly reliable, cost effective secondary storage with high performance for read accesses and large write accesses. Their performance on small writes, however, is much worse than mirrored disks — the traditional, highly reliable, but expensive organization for secondary storage. Unfortunately, small writes are a substantial portion of the I/O workload of many important, demanding applications such as on-line transaction processing. This paper presents parity logging, a novel solution to the small write problem for redundant disk arrays. Parity logging applies journalling techniques to substantially reduce the cost of small writes. We provide a detailed analysis of parity logging and competing schemes — mirroring, floating storage, and RAID level 5 — and verify these models by simulation. Parity logging provides performance competitive with mirroring, the best of the alternative single failure tolerating disk array organizations. However, its overhead cost is close to the minimum offered by RAID level 5. Finally, parity logging can exploit data caching much more effectively than all three alternative approaches.
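A minimal sketch of the parity-logging idea described above, assuming a simplified one-parity-block-per-data-block layout: rather than updating parity in place on every small write (a random read-modify-write), the XOR "parity update image" is appended to a sequential log and folded into parity in batches. Class and method names here are illustrative, not from the paper.

```python
class ParityLoggingArray:
    """Toy model: small writes log parity deltas instead of
    rewriting parity in place; a batch pass applies the log."""

    def __init__(self, n_blocks, block_size=4):
        self.data = [bytes(block_size) for _ in range(n_blocks)]
        # Simplification: one parity block per data block.
        self.parity = [bytes(block_size) for _ in range(n_blocks)]
        self.log = []  # sequential log of (block_no, xor_delta)

    @staticmethod
    def _xor(a, b):
        return bytes(x ^ y for x, y in zip(a, b))

    def small_write(self, block_no, new_data):
        old = self.data[block_no]
        delta = self._xor(old, new_data)    # parity update image
        self.data[block_no] = new_data      # in-place data write
        self.log.append((block_no, delta))  # cheap sequential append

    def apply_log(self):
        # Batch phase: fold accumulated deltas into parity with
        # large, efficient I/Os, then truncate the log.
        for block_no, delta in self.log:
            self.parity[block_no] = self._xor(self.parity[block_no], delta)
        self.log.clear()
```

The win is that the expensive random parity update is deferred and amortized; the log itself is written with fast sequential appends.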
Fast Interrupt Priority Management in Operating System Kernels
In this paper we describe a new, low-overhead technique for manipulating processor interrupt state in an operating system kernel. Both uniprocessor and multiprocessor operating systems protect against uniprocessor deadlock and data corruption by selectively enabling and disabling interrupts during critical sections. This happens frequently during latency-critical activities such as IPC, scheduling, and memory management. Unfortunately, the cycle cost of modifying the interrupt mask has increased by an order of magnitude in recent processor architectures. In this paper we describe optimistic interrupt protection, a technique which substantially reduces the cost of interrupt masking by optimizing mask manipulation for the common case of no interrupts. We present results for the Mach 3.0 microkernel operating system, although the technique is applicable to other kernel architectures, both micro and monolithic, that rely on interrupts to manage devices.
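A hypothetical model of the optimistic protection scheme described above: the critical section sets a cheap software flag instead of touching the expensive hardware interrupt mask; an interrupt arriving while the flag is set is recorded as pending and replayed on section exit, so only that rare contended path pays extra cost. The names below are illustrative, not the paper's API.

```python
class OptimisticProtection:
    """Toy model: a software flag stands in for the hardware
    interrupt mask; deferred interrupts replay on section exit."""

    def __init__(self):
        self.in_critical = False  # cheap software "mask"
        self.pending = []         # interrupts deferred during the section
        self.handled = []         # interrupts actually serviced, in order

    def interrupt(self, irq):
        if self.in_critical:
            self.pending.append(irq)   # rare slow path: defer
        else:
            self.handled.append(irq)   # common case: service at once

    def critical_section(self, body):
        self.in_critical = True        # one store, no mask write
        try:
            body()
        finally:
            self.in_critical = False
            while self.pending:        # replay deferred interrupts
                self.handled.append(self.pending.pop(0))
```

In the common case of no interrupt during the section, the only cost is two ordinary stores, which is the source of the speedup over real mask manipulation.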
A Redundant Disk Array Architecture for Efficient Small Writes
Parity encoded redundant disk arrays provide highly reliable, cost effective secondary storage with high performance for reads and large writes. Their performance on small writes, however, is much worse than mirrored disks — the traditional, highly reliable, but expensive organization for secondary storage. Unfortunately, small writes are a substantial portion of the I/O workload of many important, demanding applications such as on-line transaction processing. This paper presents parity logging, a novel solution to the small write problem for redundant disk arrays. Parity logging applies journalling techniques to substantially reduce the cost of small writes. We provide detailed models of parity logging and competing schemes — mirroring, floating storage, and RAID level 5 — and verify these models by simulation. Parity logging provides performance competitive with mirroring, but with capacity overhead close to the minimum offered by RAID level 5. Finally, parity logging can exploit data caching more effectively than all three alternative approaches.
Informed Prefetching and Caching
The underutilization of disk parallelism and file cache buffers by traditional file systems induces I/O stall time that degrades the performance of modern microprocessor-based systems. In this paper, we present aggressive mechanisms that tailor file system resource management to the needs of I/O-intensive applications. In particular, we show how to use application-disclosed access patterns (hints) to expose and exploit I/O parallelism and to allocate dynamically file buffers among three competing demands: prefetching hinted blocks, caching hinted blocks for reuse, and caching recently used data for unhinted accesses. Our approach estimates the impact of alternative buffer allocations on application execution time and applies a cost-benefit analysis to allocate buffers where they will have the greatest impact. We implemented informed prefetching and caching in DEC’s OSF/1 operating system and measured its performance on a 150 MHz Alpha equipped with 15 disks running a range of applications including text search, 3D scientific visualization, relational database queries, speech recognition, and computational chemistry. Informed prefetching reduces the execution time of the first four of these applications by 20% to 87%. Informed caching reduces the execution time of the fifth application by up to 30%.
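A minimal sketch of the cost-benefit allocation idea described above, under illustrative assumptions: each competing demand (prefetching hinted blocks, caching hinted blocks, LRU caching of unhinted data) exposes an estimated marginal reduction in stall time per additional buffer, and buffers go one at a time to whichever demand benefits most. The function and parameter names are hypothetical, not from the OSF/1 implementation.

```python
def allocate_buffers(total, demands):
    """Greedy cost-benefit allocation of `total` buffers.

    demands: dict mapping a demand name to (benefit_fn, cap), where
    benefit_fn(n) estimates the stall-time reduction from giving that
    demand its (n+1)-th buffer, and cap bounds its allocation.
    """
    alloc = {name: 0 for name in demands}
    for _ in range(total):
        best, best_gain = None, 0.0
        for name, (benefit, cap) in demands.items():
            if alloc[name] < cap:
                gain = benefit(alloc[name])  # marginal benefit of one more
                if gain > best_gain:
                    best, best_gain = name, gain
        if best is None:  # no demand gains from another buffer
            break
        alloc[best] += 1
    return alloc

# Illustrative diminishing-returns benefit curves (made up):
demands = {
    'prefetch':     (lambda n: 10.0 / (n + 1), 4),
    'hinted_cache': (lambda n: 6.0 / (n + 1), 4),
    'lru':          (lambda n: 2.0 / (n + 1), 4),
}
```

Because benefits diminish as each demand accumulates buffers, the greedy loop naturally balances prefetch depth against cache retention rather than letting one consumer starve the others.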
A Redundant Disk Array Architecture for Efficient Small Writes (CMU-CS-94-170)
Parity encoded redundant disk arrays provide highly reliable, cost effective secondary storage with high performance for reads and large writes. Their performance on small writes, however, is much worse than mirrored disks - the traditional, highly reliable, but expensive organization for secondary storage. Unfortunately, small writes are a substantial portion of the I/O workload of many important, demanding applications such as on-line transaction processing. This paper presents parity logging, a novel solution to the small write problem for redundant disk arrays. Parity logging applies journalling techniques to substantially reduce the cost of small writes. We provide detailed models of parity logging and competing schemes - mirroring, floating storage, and RAID level 5 - and verify these models by simulation. Parity logging provides performance competitive with mirroring, but with capacity overhead close to the minimum offered by RAID level 5. Finally, parity logging can exploit data caching more effectively than all three alternative approaches.
A Redundant Disk Array Architecture for Efficient Small Writes (CMU-CS-93-200)
Parity encoded redundant disk arrays provide highly reliable, cost effective secondary storage with high performance for reads and large writes. Their performance on small writes, however, is much worse than mirrored disks - the traditional, highly reliable, but expensive organization for secondary storage. Unfortunately, small writes are a substantial portion of the I/O workload of many important, demanding applications such as on-line transaction processing. This paper presents parity logging, a novel solution to the small write problem for redundant disk arrays. Parity logging applies journalling techniques to substantially reduce the cost of small writes. We provide a detailed analysis of parity logging and competing schemes - mirroring, floating storage, and RAID level 5 - and verify these models by simulation. Parity logging provides performance competitive with mirroring, the best of the alternative single failure tolerating disk array organizations. However, its overhead is close to the minimum offered by RAID level 5. Finally, parity logging can exploit data caching much more effectively than all three alternative approaches.
A Transport Layer for Live Streaming in a Content Delivery Network
Streaming media on the internet has experienced rapid growth over the last few years and will continue to increase in importance as broadband technologies and authoring tools continue to improve. As the internet becomes an increasingly popular alternative to traditional communications media, internet streaming will become a significant component of many content providers’ communications strategy. Internet streaming, however, poses significant distribution challenges for content providers. Scalability, quality, reliability, and cost are all issues that have to be addressed in a successful streaming media offering. Streaming Content Delivery Networks attempt to provide solutions to the bottlenecks encountered by streaming applications on the internet. However, only a small number of them have been deployed, and little is known about the internal organization of these systems. In this paper we discuss the design choices made during the evolution of Akamai’s Content Delivery Network for Streaming Media. In particular, we look at the design choices made to ensure the network’s scalability, quality of delivered content, and reliability while keeping costs low. Performance studies conducted on the evolving system indicate that our design scores highly on all of the above categories.